[MLIR] Introduce RemarkEngine + pluggable remark streaming (YAML/Bitstream) #152474
@llvm/pr-subscribers-mlir @llvm/pr-subscribers-mlir-core
Author: Guray Ozen (grypp)
Changes: This patch hooks LLVM’s remarks infrastructure into MLIR. User code can directly print remarks; see the examples later in this thread.
Patch is 37.47 KiB, truncated to 20.00 KiB below, full version: https://github.com/llvm/llvm-project/pull/152474.diff 7 Files Affected:
diff --git a/mlir/include/mlir/IR/MLIRContext.h b/mlir/include/mlir/IR/MLIRContext.h
index ef8dab87f131a..ab8f4aed3fc79 100644
--- a/mlir/include/mlir/IR/MLIRContext.h
+++ b/mlir/include/mlir/IR/MLIRContext.h
@@ -32,6 +32,7 @@ class InFlightDiagnostic;
class Location;
class MLIRContextImpl;
class RegisteredOperationName;
+class RemarkEngine;
class StorageUniquer;
class IRUnit;
@@ -212,6 +213,9 @@ class MLIRContext {
/// Returns the diagnostic engine for this context.
DiagnosticEngine &getDiagEngine();
+ /// Returns the remark engine for this context.
+ std::unique_ptr<RemarkEngine> &getRemarkEngine();
+
/// Returns the storage uniquer used for creating affine constructs.
StorageUniquer &getAffineUniquer();
@@ -245,6 +249,14 @@ class MLIRContext {
/// (attributes, operations, types, etc.).
llvm::hash_code getRegistryHash();
+ /// Set up optimization remarks for the context.
+ void setupOptimizationRemarks(
+ StringRef outputPath, StringRef outputFormat, bool printAsEmitRemarks,
+ const std::optional<std::string> &categoryPassName = std::nullopt,
+ const std::optional<std::string> &categoryMissName = std::nullopt,
+ const std::optional<std::string> &categoryAnalysisName = std::nullopt,
+ const std::optional<std::string> &categoryFailedName = std::nullopt);
+
//===--------------------------------------------------------------------===//
// Action API
//===--------------------------------------------------------------------===//
@@ -281,6 +293,9 @@ class MLIRContext {
}
private:
+ /// Set the remark engine for this context.
+ void setRemarkEngine(std::unique_ptr<RemarkEngine> engine);
+
/// Return true if the given dialect is currently loading.
bool isDialectLoading(StringRef dialectNamespace);
diff --git a/mlir/include/mlir/IR/Remarks.h b/mlir/include/mlir/IR/Remarks.h
new file mode 100644
index 0000000000000..8f7e7a900e7f1
--- /dev/null
+++ b/mlir/include/mlir/IR/Remarks.h
@@ -0,0 +1,454 @@
+//===- Remarks.h - MLIR Optimization Remark ----------------------*- C++-*-===//
+//
+// Part of the LLVM Project, under the Apache License v2.0 with LLVM Exceptions.
+// See https://llvm.org/LICENSE.txt for license information.
+// SPDX-License-Identifier: Apache-2.0 WITH LLVM-exception
+//
+//===----------------------------------------------------------------------===//
+//
+// This file defines utilities for emitting optimization remarks.
+//
+//===----------------------------------------------------------------------===//
+
+#ifndef MLIR_IR_REMARKS_H
+#define MLIR_IR_REMARKS_H
+
+#include "llvm/IR/DiagnosticInfo.h"
+#include "llvm/IR/DiagnosticPrinter.h"
+#include "llvm/Remarks/Remark.h"
+#include "llvm/Remarks/RemarkStreamer.h"
+#include "llvm/Support/ToolOutputFile.h"
+
+#include "mlir/IR/Diagnostics.h"
+#include "mlir/IR/MLIRContext.h"
+
+namespace mlir {
+
+/// Defines different remark kinds that can be used to categorize remarks.
+enum class RemarkKind {
+ OptimizationRemarkUnknown = 0,
+ OptimizationRemarkPass,
+ OptimizationRemarkMissed,
+ OptimizationRemarkFailure,
+ OptimizationRemarkAnalysis,
+};
+
+//===----------------------------------------------------------------------===//
+// Remark Base Class
+//===----------------------------------------------------------------------===//
+class RemarkBase : public llvm::DiagnosticInfo {
+
+public:
+ RemarkBase(RemarkKind remarkKind, DiagnosticSeverity severity,
+ const char *passName, StringRef remarkName, Location loc,
+ std::optional<StringRef> functionName = std::nullopt)
+ : llvm::DiagnosticInfo(makeLLVMKind(remarkKind),
+ makeLLVMSeverity(severity)),
+ remarkKind(remarkKind), functionName(functionName), loc(loc),
+ passName(passName), remarkName(remarkName) {}
+
+ struct SetIsVerbose {};
+
+ struct SetExtraArgs {};
+
+ struct RemarkKeyValue {
+ std::string key;
+ std::string val;
+
+ explicit RemarkKeyValue(StringRef str = "") : key("String"), val(str) {}
+ RemarkKeyValue(StringRef key, Value value);
+ RemarkKeyValue(StringRef key, Type type);
+ RemarkKeyValue(StringRef key, StringRef s);
+ RemarkKeyValue(StringRef key, const char *s)
+ : RemarkKeyValue(key, StringRef(s)) {};
+ RemarkKeyValue(StringRef key, int n);
+ RemarkKeyValue(StringRef key, float n);
+ RemarkKeyValue(StringRef key, long n);
+ RemarkKeyValue(StringRef key, long long n);
+ RemarkKeyValue(StringRef key, unsigned n);
+ RemarkKeyValue(StringRef key, unsigned long n);
+ RemarkKeyValue(StringRef key, unsigned long long n);
+ RemarkKeyValue(StringRef key, bool b)
+ : key(key), val(b ? "true" : "false") {}
+ };
+
+ void insert(StringRef s);
+ void insert(RemarkKeyValue a);
+ void insert(SetIsVerbose v);
+ void insert(SetExtraArgs ea);
+
+ void print(llvm::DiagnosticPrinter &dp) const override;
+ void print() const;
+
+ virtual bool isEnabled() const = 0;
+ Location getLocation() const { return loc; }
+ llvm::remarks::RemarkLocation getRemarkLocation() const;
+ /// Diagnostic -> Remark
+ llvm::remarks::Remark generateRemark() const;
+
+ StringRef getFunction() const {
+ if (functionName)
+ return *functionName;
+ return "<unknown function>";
+ }
+ StringRef getPassName() const { return passName; }
+ StringRef getRemarkName() const { return remarkName; }
+ std::string getMsg() const;
+
+ bool isVerbose() const { return isVerboseRemark; }
+
+ ArrayRef<RemarkKeyValue> getArgs() const { return args; }
+
+ llvm::remarks::Type getRemarkType() const;
+
+protected:
+ /// Keeps the MLIR diagnostic kind, which is used to determine the
+ /// diagnostic kind in the LLVM remark streamer.
+ RemarkKind remarkKind;
+ /// Name of the covering function-like interface.
+ std::optional<std::string> functionName;
+
+ Location loc;
+ /// Name of the pass that triggers this report.
+ const char *passName;
+
+ /// Textual identifier for the remark (single-word, CamelCase). Can be used
+ /// by external tools reading the output file for optimization remarks to
+ /// identify the remark.
+ StringRef remarkName;
+
+ /// RemarkKeyValues collected via the streaming interface.
+ SmallVector<RemarkKeyValue, 4> args;
+
+ /// The remark is expected to be noisy.
+ bool isVerboseRemark = false;
+
+private:
+ /// Convert the MLIR diagnostic severity to LLVM diagnostic severity.
+ static llvm::DiagnosticSeverity
+ makeLLVMSeverity(DiagnosticSeverity severity) {
+ switch (severity) {
+ case DiagnosticSeverity::Note:
+ return llvm::DiagnosticSeverity::DS_Note;
+ case DiagnosticSeverity::Warning:
+ return llvm::DiagnosticSeverity::DS_Warning;
+ case DiagnosticSeverity::Error:
+ return llvm::DiagnosticSeverity::DS_Error;
+ case DiagnosticSeverity::Remark:
+ return llvm::DiagnosticSeverity::DS_Remark;
+ }
+ llvm_unreachable("Unknown diagnostic severity");
+ }
+ /// Convert the MLIR remark kind to LLVM diagnostic kind.
+ static llvm::DiagnosticKind makeLLVMKind(RemarkKind remarkKind) {
+ switch (remarkKind) {
+ case RemarkKind::OptimizationRemarkUnknown:
+ return llvm::DiagnosticKind::DK_Generic;
+ case RemarkKind::OptimizationRemarkPass:
+ return llvm::DiagnosticKind::DK_OptimizationRemark;
+ case RemarkKind::OptimizationRemarkMissed:
+ return llvm::DiagnosticKind::DK_OptimizationRemarkMissed;
+ case RemarkKind::OptimizationRemarkFailure:
+ return llvm::DiagnosticKind::DK_OptimizationFailure;
+ case RemarkKind::OptimizationRemarkAnalysis:
+ return llvm::DiagnosticKind::DK_OptimizationRemarkAnalysis;
+ }
+ llvm_unreachable("Unknown diagnostic kind");
+ }
+};
+
+// clang-format off
+template <class RemarkT>
+decltype(auto) operator<<(
+ RemarkT &&r,
+ std::enable_if_t<std::is_base_of_v<RemarkBase,
+ std::remove_reference_t<RemarkT>>,
+ StringRef>
+ s) {
+ r.insert(s);
+ return std::forward<RemarkT>(r);
+}
+
+template <class RemarkT>
+decltype(auto) operator<<(
+ RemarkT &&r,
+ std::enable_if_t<std::is_base_of_v<RemarkBase,
+ std::remove_reference_t<RemarkT>>,
+ RemarkBase::RemarkKeyValue>
+ a) {
+ r.insert(std::move(a));
+ return std::forward<RemarkT>(r);
+}
+
+template <class RemarkT>
+decltype(auto) operator<<(
+ RemarkT &&r,
+ std::enable_if_t<std::is_base_of_v<RemarkBase,
+ std::remove_reference_t<RemarkT>>,
+ RemarkBase::SetIsVerbose>
+ v) {
+ r.insert(v);
+ return std::forward<RemarkT>(r);
+}
+
+template <class RemarkT>
+decltype(auto) operator<<(
+ RemarkT &&r,
+ std::enable_if_t<std::is_base_of_v<RemarkBase,
+ std::remove_reference_t<RemarkT>>,
+ RemarkBase::SetExtraArgs>
+ ea) {
+ r.insert(ea);
+ return std::forward<RemarkT>(r);
+}
+// clang-format on
+
+//===----------------------------------------------------------------------===//
+// Shorthand aliases for different kinds of remarks.
+//===----------------------------------------------------------------------===//
+
+template <RemarkKind K, DiagnosticSeverity S>
+class OptRemarkBase final : public RemarkBase {
+public:
+ explicit OptRemarkBase(Location loc, StringRef passName, StringRef remarkName)
+ : RemarkBase(K, S, passName.data(), remarkName, loc) {}
+
+ bool isEnabled() const override { return true; }
+};
+
+using OptRemarkAnalysis = OptRemarkBase<RemarkKind::OptimizationRemarkAnalysis,
+ DiagnosticSeverity::Remark>;
+
+using OptRemarkPass = OptRemarkBase<RemarkKind::OptimizationRemarkPass,
+ DiagnosticSeverity::Remark>;
+
+using OptRemarkMissed = OptRemarkBase<RemarkKind::OptimizationRemarkMissed,
+ DiagnosticSeverity::Remark>;
+
+using OptRemarkFailure = OptRemarkBase<RemarkKind::OptimizationRemarkFailure,
+ DiagnosticSeverity::Remark>;
+
+class RemarkEngine;
+
+//===----------------------------------------------------------------------===//
+// InFlightRemark
+//===----------------------------------------------------------------------===//
+
+/// InFlightRemark is a RAII class that holds a reference to a RemarkBase
+/// instance and allows to build the remark using the << operator. The remark
+/// is emitted when the InFlightRemark instance is destroyed, which happens
+/// when the scope ends or when the InFlightRemark instance is moved.
+/// Similar to InFlightDiagnostic, but for remarks.
+class InFlightRemark {
+public:
+ explicit InFlightRemark(RemarkBase *diag) : remark(diag) {}
+ InFlightRemark(RemarkEngine &eng, RemarkBase *diag)
+ : owner(&eng), remark(diag) {}
+
+ InFlightRemark() = default; // empty ctor
+
+ template <typename T>
+ InFlightRemark &operator<<(T &&arg) {
+ if (remark)
+ *remark << std::forward<T>(arg);
+ return *this;
+ }
+
+ explicit operator bool() const { return remark != nullptr; }
+
+ ~InFlightRemark();
+
+ InFlightRemark(const InFlightRemark &) = delete;
+ InFlightRemark &operator=(const InFlightRemark &) = delete;
+ InFlightRemark(InFlightRemark &&) = default;
+ InFlightRemark &operator=(InFlightRemark &&) = default;
+
+private:
+ RemarkEngine *owner{nullptr};
+ std::unique_ptr<RemarkBase> remark;
+};
+
+//===----------------------------------------------------------------------===//
+// MLIR Remark Streamer
+//===----------------------------------------------------------------------===//
+
+class MLIRRemarkStreamer {
+ llvm::remarks::RemarkStreamer &remarkStreamer;
+
+public:
+ explicit MLIRRemarkStreamer(llvm::remarks::RemarkStreamer &remarkStreamer)
+ : remarkStreamer(remarkStreamer) {}
+
+ void streamOptimizationRemark(const RemarkBase &remark);
+};
+
+//===----------------------------------------------------------------------===//
+// Remark Engine (MLIR Context will own this class)
+//===----------------------------------------------------------------------===//
+
+class RemarkEngine {
+private:
+ /// Regex filter for missed optimization remark categories.
+ std::optional<llvm::Regex> missFilter;
+ /// Regex filter for passed optimization remark categories.
+ std::optional<llvm::Regex> passFilter;
+ /// Regex filter for analysis remark categories.
+ std::optional<llvm::Regex> analysisFilter;
+ /// Regex filter for failed optimization remark categories.
+ std::optional<llvm::Regex> failedFilter;
+ /// The output file for the remarks.
+ std::unique_ptr<llvm::ToolOutputFile> remarksFile;
+ /// The MLIR remark streamer that will be used to emit the remarks.
+ std::unique_ptr<MLIRRemarkStreamer> remarkStreamer;
+ std::unique_ptr<llvm::remarks::RemarkStreamer> llvmRemarkStreamer;
+ /// When enabled, the engine also prints remarks as mlir::emitRemarks.
+ bool printAsEmitRemarks;
+ /// The main MLIR remark streamer that will be used to emit the remarks.
+ MLIRRemarkStreamer *getLLVMRemarkStreamer() { return remarkStreamer.get(); }
+ const MLIRRemarkStreamer *getLLVMRemarkStreamer() const {
+ return remarkStreamer.get();
+ }
+ void
+ setLLVMRemarkStreamer(std::unique_ptr<MLIRRemarkStreamer> remarkStreamer) {
+ this->remarkStreamer = std::move(remarkStreamer);
+ }
+
+ /// Get the main MLIR remark streamer that will be used to emit the remarks.
+ llvm::remarks::RemarkStreamer *getMainRemarkStreamer() {
+ return llvmRemarkStreamer.get();
+ }
+ const llvm::remarks::RemarkStreamer *getMainRemarkStreamer() const {
+ return llvmRemarkStreamer.get();
+ }
+ /// Set the main remark streamer to be used by the engine.
+ void setMainRemarkStreamer(
+ std::unique_ptr<llvm::remarks::RemarkStreamer> mainRemarkStreamer) {
+ llvmRemarkStreamer = std::move(mainRemarkStreamer);
+ }
+
+ /// Return true if missed optimization remarks are enabled, override
+ /// to provide different implementation.
+ bool isMissedOptRemarkEnabled(StringRef categoryName) const;
+
+ /// Return true if passed optimization remarks are enabled, override
+ /// to provide different implementation.
+ bool isPassedOptRemarkEnabled(StringRef categoryName) const;
+
+ /// Return true if analysis optimization remarks are enabled, override
+ /// to provide different implementation.
+ bool isAnalysisOptRemarkEnabled(StringRef categoryName) const;
+
+ /// Return true if failed optimization remarks are enabled, override
+ /// to provide different implementation.
+ bool isFailedOptRemarkEnabled(StringRef categoryName) const;
+
+ /// Return true if any type of remarks are enabled for this pass.
+ bool isAnyRemarkEnabled(StringRef categoryName) const {
+ return (isMissedOptRemarkEnabled(categoryName) ||
+ isPassedOptRemarkEnabled(categoryName) ||
+ isFailedOptRemarkEnabled(categoryName) ||
+ isAnalysisOptRemarkEnabled(categoryName));
+ }
+
+ /// Emit a remark using the given maker function, which should return
+ /// a RemarkBase instance. The remark will be emitted using the main
+ /// remark streamer.
+ template <typename RemarkT, typename... Args>
+ InFlightRemark makeRemark(Args &&...args);
+
+ template <typename RemarkT>
+ InFlightRemark
+ emitIfEnabled(Location loc, StringRef passName, StringRef category,
+ bool (RemarkEngine::*isEnabled)(StringRef) const);
+
+public:
+ RemarkEngine() = delete;
+ /// Constructs Remark engine with optional category names
+ RemarkEngine(bool printAsEmitRemarks,
+ std::optional<std::string> categoryPassName = std::nullopt,
+ std::optional<std::string> categoryMissName = std::nullopt,
+ std::optional<std::string> categoryAnalysisName = std::nullopt,
+ std::optional<std::string> categoryFailedName = std::nullopt);
+
+ /// Destructor that will close the output file and reset the
+ /// main remark streamer.
+ ~RemarkEngine();
+
+ /// Setup the remark engine with the given output path and format.
+ llvm::Error initialize(StringRef outputPath, StringRef outputFormat);
+
+ /// Report a diagnostic remark.
+ void report(const RemarkBase &&diag);
+
+ /// Report a successful remark, this will create an InFlightRemark
+ /// that can be used to build the remark using the << operator.
+ InFlightRemark emitOptimizationRemark(Location loc, StringRef passName,
+ StringRef category);
+ /// Report a missed optimization remark
+ /// that can be used to build the remark using the << operator.
+ InFlightRemark emitOptimizationRemarkMiss(Location loc, StringRef passName,
+ StringRef category);
+ /// Report a failed optimization remark, this will create an InFlightRemark
+ /// that can be used to build the remark using the << operator.
+ InFlightRemark emitOptimizationRemarkFailure(Location loc, StringRef passName,
+ StringRef category);
+ /// Report an analysis remark, this will create an InFlightRemark
+ /// that can be used to build the remark using the << operator.
+ InFlightRemark emitOptimizationRemarkAnalysis(Location loc,
+ StringRef passName,
+ StringRef category);
+};
+
+//===----------------------------------------------------------------------===//
+// Emitters
+//===----------------------------------------------------------------------===//
+
+using Suggestion = RemarkBase::RemarkKeyValue;
+inline Suggestion suggest(StringRef txt) { return {"Suggestion", txt}; }
+
+template <typename Fn, typename... Args>
+[[nodiscard]] inline InFlightRemark withEngine(Fn fn, Location loc,
+ Args &&...args) {
+ MLIRContext *ctx = loc->getContext();
+
+ auto &enginePtr = ctx->getRemarkEngine();
+
+ if (RemarkEngine *eng = enginePtr.get())
+ return (eng->*fn)(loc, std::forward<Args>(args)...);
+
+ return {};
+}
+
+/// Report an optimization remark that was passed.
+inline InFlightRemark reportOptimizationPass(Location loc, StringRef cat,
+ StringRef passName) {
+ return withEngine(&RemarkEngine::emitOptimizationRemark, loc, passName, cat);
+}
+
+/// Report an optimization remark that was missed.
+inline InFlightRemark reportOptimizationMiss(Location loc, StringRef cat,
+ StringRef passName,
+ StringRef suggestion) {
+ auto r =
+ withEngine(&RemarkEngine::emitOptimizationRemarkMiss, loc, passName, cat);
+ if (r)
+ r << suggest(suggestion);
+ return r;
+}
+/// Report an optimization failure remark.
+inline InFlightRemark reportOptimizationFail(Location loc, StringRef cat,
+ StringRef passName) {
+ return withEngine(&RemarkEngine::emitOptimizationRemarkFailure, loc, passName,
+ cat);
+}
+
+/// Report an optimization analysis remark.
+inline InFlightRemark reportOptimizationAnalysis(Location loc, StringRef cat,
+ StringRef passName) {
+ return withEngine(&RemarkEngine::emitOptimizationRemarkAnalysis, loc,
+ passName, cat);
+}
+
+} // namespace mlir
+
+#endif // MLIR_IR_REMARKS_H
\ No newline at end of file
diff --git a/mlir/lib/IR/CMakeLists.txt b/mlir/lib/IR/CMakeLists.txt
index 3ef69cea18f0a..438e486f2ad4c 100644
--- a/mlir/lib/IR/CMakeLists.txt
+++ b/mlir/lib/IR/CMakeLists.txt
@@ -33,6 +33,7 @@ add_mlir_library(MLIRIR
PatternMatch.cpp
Region.cpp
RegionKindInterface.cpp
+ Remarks.cpp
SymbolTable.cpp
TensorEncoding.cpp
Types.cpp
@@ -69,5 +70,10 @@ add_mlir_library(MLIRIR
LINK_LIBS PUBLIC
MLIRSupport
+
+ LINK_COMPONENTS
+ Remarks
+ Core
+ BitstreamReader
)
diff --git a/mlir/lib/IR/MLIRContext.cpp b/mlir/lib/IR/MLIRContext.cpp
index 2d5381d43f863..5a37de1481eb9 100644
--- a/mlir/lib/IR/MLIRContext.cpp
+++ b/mlir/lib/IR/MLIRContext.cpp
@@ -25,12 +25,14 @@
#include "mlir/IR/Location.h"
#include "mlir/IR/OpImplementation.h"
#include "mlir/IR/OperationSupport.h"
+#include "mlir/IR/Remarks.h"
#include "llvm/ADT/DenseMap.h"
#include "llvm/ADT/Twine.h"
#include "llvm/Support/Allocator.h"
#inclu...
[truncated]
joker-eph
left a comment
Thanks!
Some quick comments inline.
Can you write a doc in the mlir/docs folder?
I suspect it may be suitable as a new section in mlir/docs/Diagnostics.md (or linked from there otherwise).
+1 I am not familiar with the LLVM remark infrastructure and I am curious what I could use this for in MLIR.
This patch hooks LLVM’s remarks infrastructure into MLIR.
* Disabled by default, zero-cost when off. Enable per context with `MLIRContext::setupOptimizationRemarks(...)`.
* YAML or bitstream output: via new `MLIRRemarkStreamer` (subclasses `llvm::remarks::RemarkStreamer`).
* Implements simple APIs for passes. User code can directly print remarks like:
```
reportOptimizationPass(loc, categoryName, "MyPass") << "vectorized";
reportOptimizationMiss(loc, categoryLoopUnroll, "MyPass",
"try increasing unroll factor") << "...";
reportOptimizationFail(loc, categoryLoopUnroll, "MyPass") << "...";
reportOptimizationAnalysis(loc, categoryLoopUnroll, "MyPass")
<< "estimated trip count: " << tripCount;
```
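For context, here is a minimal sketch of the per-context setup mentioned in the first bullet, based on the `setupOptimizationRemarks` declaration in the diff above. The output path, format string, and category regexes are placeholder values, and the format strings are assumed to mirror llvm::remarks ("yaml"/"bitstream"):

```
#include "mlir/IR/MLIRContext.h"

// Hypothetical driver snippet: enable remark collection on a context before
// running the pass pipeline. Remarks emitted through the report* helpers are
// streamed to the output file; printAsEmitRemarks additionally mirrors them
// as ordinary mlir::emitRemark diagnostics.
mlir::MLIRContext context;
context.setupOptimizationRemarks(
    /*outputPath=*/"remarks.yaml",
    /*outputFormat=*/"yaml",
    /*printAsEmitRemarks=*/true,
    /*categoryPassName=*/std::string(".*"),
    /*categoryMissName=*/std::string("LoopUnroll"));
```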
I added a doc.
I've updated the PR. I addressed the concerns.
joker-eph
left a comment
(apparently I had unflushed comments; they may or may not be relevant, but flushing now)
razvanlupusoru
left a comment
Thank you Guray for your contribution! The latest design with the decoupled RemarkEngine and Streamer looks great.
From an implementation perspective, my only concern is that MLIR Diagnostics already supports "Remarks" as a type of message that can be emitted, which might confuse a compiler developer trying to emit such messages. So River's concerns about framework duplication seem valid. However, one way the separation makes sense is if the requirements and documentation of the Remarks framework make it clear that these are user-facing messages. Thus, a set of guidelines around messages would really benefit this framework; and I would be happy to help contribute this.
Most of my comments are less about the implementation and more about this API in MLIR. This API should be easy for a compiler developer to use without ambiguities in reporting, and it should avoid words like "pass" and "optimization," which suggest only a subset of transformations that an MLIR-based compiler can handle. My suggestions are not about adding functionality but about removing unnecessary elements and clarifying the wording.
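For readers weighing the duplication concern above, here is a rough sketch (not from the patch) of how the same message flows through the two mechanisms, using the names from the diff with placeholder category and pass names:

```
// Existing diagnostics path: a diagnostic with "remark" severity, routed to
// whatever handlers are attached to the context's DiagnosticEngine.
mlir::emitRemark(loc) << "loop vectorized";

// New remark-engine path: collected by RemarkEngine, filtered by category,
// and streamed to YAML/bitstream (optionally also mirrored as emitRemark
// when printAsEmitRemarks is enabled).
mlir::reportOptimizationPass(loc, /*cat=*/"Vectorizer", /*passName=*/"MyPass")
    << "loop vectorized";
```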
namespace mlir::remark {
/// Defines different remark kinds that can be used to categorize remarks.
enum class RemarkKind {
Can you please document what each of these kinds means?
What's happening to "unknown" remarks?
I put unknown there as good practice, to avoid UB in case the kind is forgotten to be set.
But we don't have a way to generate unknown remarks today.
@joker-eph let me know if it's all good. ping @razvanlupusoru @River707
razvanlupusoru
left a comment
Thank you Guray. I marked my requests for fewer remark kinds as resolved yesterday due to your justifications. I have just a few remaining small requests - but otherwise looks good to me. I am traveling so I won't look at this again for at least 1.5 weeks - and no need to wait for me as long as you get a +1 from the other reviewers.
ping reviewers @River707 @jpienaar @joker-eph
joker-eph
left a comment
The MLIRContext changes are minimal, and everything seems well decoupled from a dependency point of view, so that all LGTM.
There are just a few comment threads that are still open at the moment.
joker-eph
left a comment
I don't see anything left open right now, LGTM.
It would still be great to get another approval besides mine.
Could we plug this into …
Yes, I thought it's better to do that in a follow-up PR so it's easy to read.
Lg
Aren't LLVM docs supposed to be in rst format, not md? I know LLMs prefer markdown but the Sphinx doc generation uses rst.
This is MLIR: everything is markdown |
Should we move all of LLVM docs to markdown too? It's a bit odd to have two different documentation formats in the same project.
I think that's a different topic, and it's probably not the right place to discuss it. An RFC on Discourse would be better suited, and I suspect such a change would need to be motivated on its own: just saying we need to convert all of LLVM docs because MLIR is using markdown seems a bit weak considering the effort, and there are so many things to improve already in the LLVM website and docs that this may not be where we need to invest the effort right now.
This PR implements structured, tooling-friendly optimization remarks with zero cost unless enabled. It implements:
* `RemarkEngine`: collects finalized remarks within `MLIRContext`.
* `MLIRRemarkStreamerBase`: abstract class that streams them to a backend, either `MLIRLLVMRemarkStreamer` (bridges to llvm::remarks → YAML/Bitstream) or your own custom streamer.

Overview
Enable the remark engine and plug in LLVM's remark streamer
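A sketch of what enabling looks like, based on the earlier revision's `MLIRContext::setupOptimizationRemarks` shown in the diff above; the final revision (with `MLIRRemarkStreamerBase`/`MLIRLLVMRemarkStreamer`) may expose a different setup entry point, and the path, format, and regex below are placeholders:

```
mlir::MLIRContext context;
// Stream remarks to a bitstream file; only remarks whose category matches the
// given regex are kept. The remaining category filters are left unset.
context.setupOptimizationRemarks(
    /*outputPath=*/"remarks.opt.bitstream",
    /*outputFormat=*/"bitstream",
    /*printAsEmitRemarks=*/false,
    /*categoryPassName=*/std::string("Unroll|Vectorizer"));
```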
API to emit remarks
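A sketch of the emission side, using the `report*` helpers from the header in the diff above (category and pass names are placeholders; in the final revision these helpers may live in the `mlir::remark` namespace, per the review snippets above):

```
// Passed optimization.
reportOptimizationPass(loc, /*cat=*/"Vectorizer", /*passName=*/"MyPass")
    << "vectorized loop with factor 4";

// Missed optimization; the suggestion argument is attached as a "Suggestion"
// key-value on the remark.
reportOptimizationMiss(loc, /*cat=*/"LoopUnroll", /*passName=*/"MyPass",
                       /*suggestion=*/"increase the unroll factor")
    << "trip count is unknown at compile time";

// Analysis remark.
reportOptimizationAnalysis(loc, /*cat=*/"LoopUnroll", /*passName=*/"MyPass")
    << "estimated trip count: 128";
```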